Graph autoencoders are efficient at embedding graph-based datasets. Most graph autoencoder architectures have shallow depths, which limits their ability to capture meaningful relations between nodes separated by multiple hops. In this paper, we propose the Residual Variational Graph Autoencoder, ResVGAE, a deep variational graph autoencoder model with multiple residual modules. We show that our multiple residual modules, graph convolutional layers with residual connections, improve the average precision of graph autoencoders. Experimental results suggest that our proposed model with residual modules outperforms the models without residual modules and achieves results comparable to other state-of-the-art methods.
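A minimal sketch of the kind of residual graph-convolution module the abstract describes: a graph convolution whose output is added back to its input, so that stacking many layers remains trainable. The class name, the ReLU activation, and the dense normalized-adjacency interface are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualGCNLayer(nn.Module):
    """One graph-convolution step with a residual (skip) connection.

    `adj_norm` is assumed to be a precomputed, symmetrically normalized
    adjacency matrix D^{-1/2} (A + I) D^{-1/2}, as in standard GCNs.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # Propagate features over the graph, then add the input back,
        # letting gradients flow past the layer in deep encoders.
        h = self.act(self.linear(adj_norm @ x))
        return x + h

if __name__ == "__main__":
    n_nodes, dim = 5, 16
    adj = torch.eye(n_nodes)            # stand-in normalized adjacency
    x = torch.randn(n_nodes, dim)
    encoder = nn.ModuleList(ResidualGCNLayer(dim) for _ in range(4))
    for layer in encoder:
        x = layer(x, adj)
    print(x.shape)                      # torch.Size([5, 16])
```

The residual connection is what allows the encoder to be deeper than the usual two-layer graph autoencoder without losing the identity mapping as a fallback.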
One of the most efficient methods for model compression is hint distillation, where the student model is injected with information (hints) from several different layers of the teacher model. Although the selection of hint points can drastically alter the compression performance, conventional distillation approaches overlook this fact and use the same hint points as in early studies. Therefore, we propose a clustering-based hint selection methodology, in which the layers of the teacher model are clustered with respect to several metrics and the cluster centers are used as the hint points. Our method is applicable to any student network once it has been applied to a chosen teacher network. The proposed approach is validated on the CIFAR-100 and ImageNet datasets, using various teacher-student pairs and numerous hint distillation methods. Our results show that the hint points selected by our algorithm result in superior compression performance compared to state-of-the-art knowledge distillation algorithms on the same student models and datasets.
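Read literally, the selection step clusters per-layer metric vectors and takes, for each cluster, the layer nearest the cluster center as a hint point. A minimal sketch under that reading; the function name, the choice of k-means, and the metric vectors themselves are illustrative assumptions, since the abstract only says the layers are clustered "with respect to several metrics".

```python
import numpy as np
from sklearn.cluster import KMeans

def select_hint_layers(layer_metrics: np.ndarray, n_hints: int) -> list[int]:
    """Cluster teacher layers by their metric vectors and return, per
    cluster, the index of the layer closest to the cluster center.

    layer_metrics: shape (n_layers, n_metrics); each row summarizes one
    teacher layer (e.g. activation statistics -- an assumption here).
    """
    km = KMeans(n_clusters=n_hints, n_init=10, random_state=0).fit(layer_metrics)
    hints = []
    for c in range(n_hints):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(
            layer_metrics[members] - km.cluster_centers_[c], axis=1
        )
        hints.append(int(members[np.argmin(dists)]))
    return sorted(hints)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    metrics = rng.normal(size=(24, 3))   # 24 teacher layers, 3 metrics each
    print(select_hint_layers(metrics, n_hints=4))
```

Because the clustering depends only on the teacher, the selected hint points can be reused for any student paired with that teacher, which is what makes the method student-agnostic.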